We sincerely thank all three reviewers for their valuable comments; our responses follow. As such, although the weights in Eq. (2) are learned to be 'static', the enriched ... The sample sizes for all models are fixed at 4096.

R1) Regarding the advance of the proposed model: we will include such experiments in our revised paper.

R1) Regarding missing related work.
We thank all reviewers for their valuable comments, which recognized the novelty, well-motivated objective, and promising results of our work. The code "ICP-pytorch" has been anonymously released on GitHub. We address the key concerns point by point below and will clarify the related issues in the revised paper. We sincerely hope R#2 will consider raising the score.
Table 1: Classification accuracies and F1 scores (in percent) under the imbalanced setting.
Thanks for the valuable comments and questions. 1) We understand the reviewer's concern that the ratio of ... Besides, there are various methods designed specifically to alleviate data imbalance. We considered Flawfinder and a commercial tool, CXXX, whose name we withhold for legal reasons. Static analyzers tend to miss most vulnerable functions and produce many false positives; e.g., Cppcheck found 0 ... One important note is that [19] didn't ... To verify this, we tested models trained with different sizes of the combined dataset, i.e., 1/3, 2/3, ... As shown in Table 2, both accuracy and F1 increase as the data volume increases.
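The data-volume ablation described above can be sketched as follows. This is an illustrative sketch only: the `subsample` helper, the fixed seed, and the placeholder dataset are our own assumptions, not the actual training pipeline.

```python
# Hypothetical sketch of the data-volume ablation: train on 1/3, 2/3,
# and the full combined dataset, then compare accuracy and F1.
import random

def subsample(dataset, fraction, seed=0):
    """Return a reproducible random subset covering `fraction` of the data."""
    rng = random.Random(seed)
    n = int(len(dataset) * fraction)
    return rng.sample(dataset, n)

dataset = list(range(9000))  # placeholder for the combined dataset
for fraction in (1/3, 2/3, 1.0):
    subset = subsample(dataset, fraction)
    print(f"fraction={fraction:.2f} -> {len(subset)} samples")
    # model = train(subset); evaluate accuracy / F1 on a fixed test split
```

Keeping the test split fixed across fractions ensures the metric differences reflect training-data volume rather than evaluation noise.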
We thank the referees for their interest in our paper and for their valuable comments, which help us make the paper clearer.
We analyzed the multi-layer case beyond what is reported in the submitted paper; the equations for the optimal error in the multi-layer case are given on pages 10-11 of the SM. The vertical lines show the PCA threshold and the optimal threshold, respectively. Our claims of the optimality of AMP are indeed limited to the cases investigated numerically. We will add a statement collecting all the assumptions in the final version.
We thank all the reviewers for their valuable comments. We would like to clarify that, when the model was trained without the mel-spectrogram loss, the training process ... We also believe that applying the L1/L2 loss carries no disadvantage in a one-to-one mapping setting such as ours. We will clarify the details of the experiments in Section 3. MOS evaluation results are shown in Table 1.

Table 1: Mean Opinion Scores. All models were trained up to 500k steps.
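For concreteness, an L1 term over mel-spectrograms can be sketched as below. This is a minimal illustration under our own assumptions (the `l1_mel_loss` name and toy inputs are hypothetical), not the released ICP-pytorch implementation.

```python
# Illustrative sketch (not the released code): an L1 loss between a
# predicted and a target mel-spectrogram, each a 2-D list of
# frames x mel bins with identical shapes.
def l1_mel_loss(pred, target):
    """Mean absolute error over two equally shaped mel-spectrograms."""
    total, count = 0.0, 0
    for frame_p, frame_t in zip(pred, target):
        for p, t in zip(frame_p, frame_t):
            total += abs(p - t)
            count += 1
    return total / count

# In a one-to-one mapping there is a single correct target per input,
# so penalizing pointwise deviations is well defined: identical
# spectrograms yield zero loss.
print(l1_mel_loss([[1.0, 2.0], [3.0, 4.0]], [[1.0, 2.0], [3.0, 4.0]]))  # -> 0.0
```

This is why, in the one-to-one setting, the L1/L2 penalty does not average over multiple valid targets and therefore introduces no blurring disadvantage.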